In probability theory, Dirichlet processes (after Peter Gustav Lejeune Dirichlet) are a family of stochastic processes whose realizations are probability distributions. In other words, a Dirichlet process is a probability distribution whose samples are themselves probability distributions. It is often used in Bayesian inference to describe the prior knowledge about the distribution of random variables, that is, how likely it is that the random variables are distributed according to one or another particular distribution.

The Dirichlet process is specified by a base distribution <math>H</math> and a positive real number <math>\alpha</math> called the concentration parameter. The base distribution is the expected value of the process, that is, the Dirichlet process draws distributions "around" the base distribution in the way that a normal distribution draws real numbers around its mean. However, even if the base distribution is continuous, the distributions drawn from the Dirichlet process are almost surely discrete. The concentration parameter specifies how strong this discretization is: in the limit of <math>\alpha \to 0</math>, the realizations are all concentrated on a single value, while in the limit of <math>\alpha \to \infty</math> the realizations become continuous. Between the two extremes the realizations are discrete distributions with less and less concentration as <math>\alpha</math> increases.

The Dirichlet process can also be seen as the infinite-dimensional generalization of the Dirichlet distribution. In the same way as the Dirichlet distribution is the conjugate prior for the categorical distribution, the Dirichlet process is the conjugate prior for infinite, nonparametric discrete distributions. A particularly important application of Dirichlet processes is as a prior probability distribution in infinite mixture models.

The Dirichlet process was formally introduced by Thomas Ferguson in 1973 and has since been applied in data mining and machine learning, among other areas in natural language processing, computer vision and bioinformatics.

==Introduction==
Dirichlet processes are usually used when modelling data that tends to repeat previous values in a "rich get richer" fashion. Specifically, suppose that the generation of values <math>X_1, X_2, \dots</math> can be simulated by the following algorithm (a code sketch of this procedure is given after the list).

:Input: <math>H</math> (a probability distribution called base distribution), <math>\alpha</math> (a positive real number called concentration parameter)
# Draw <math>X_1</math> from the distribution <math>H</math>.
# For <math>n > 1</math>:
## With probability <math>\frac{\alpha}{\alpha + n - 1}</math> draw <math>X_n</math> from <math>H</math>.
## With probability <math>\frac{n_x}{\alpha + n - 1}</math> set <math>X_n = x</math>, where <math>n_x</math> is the number of previous observations <math>X_j</math>, <math>j < n</math>, with <math>X_j = x</math>.
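The following is a minimal Python sketch of this generative scheme. The function name <code>draw_dp_samples</code>, the use of NumPy's random generator, and the standard-normal base distribution in the usage example are illustrative assumptions, not part of the original text.

<syntaxhighlight lang="python">
import numpy as np

def draw_dp_samples(n_samples, alpha, base_draw, rng=None):
    """Simulate n_samples values from the 'rich get richer' scheme above.

    base_draw: function rng -> value, drawing from the base distribution H.
    alpha:     concentration parameter (positive real).
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = []
    for n in range(1, n_samples + 1):
        # With probability alpha / (alpha + n - 1), draw a fresh value from H.
        # For n = 1 this probability is 1, which reproduces step 1 of the algorithm.
        if rng.random() < alpha / (alpha + n - 1):
            samples.append(base_draw(rng))
        else:
            # Otherwise repeat a previous observation chosen uniformly at random,
            # which assigns each previous value x probability n_x / (alpha + n - 1).
            samples.append(samples[rng.integers(len(samples))])
    return samples

# Example usage (assumed settings: standard normal base distribution H, alpha = 2.0).
values = draw_dp_samples(1000, alpha=2.0, base_draw=lambda rng: rng.normal())
print(len(set(values)))  # far fewer distinct values than 1000, because draws repeat
</syntaxhighlight>

Choosing uniformly among the previous samples is just a convenient way to implement step 2.2: a value that has already appeared <math>n_x</math> times is picked with probability proportional to <math>n_x</math>, which is exactly the "rich get richer" weighting described above.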